4 - Deep Learning [ID:11279]

Welcome everybody, today we want to continue our lecture on deep learning, and today's topics will be activation functions and convolutional neural networks. In particular, the improvements that we report on today are fundamental to the success of deep learning as we know it. So these two concepts are very important, and the changes may not seem that large, but they have a huge influence on the practicality of deep nets. So what we are doing here is really important.

Okay, so we will talk about two things. One is the activation functions. This is essentially the nonlinearity of each neuron; we will have a look into those and into what you can do to get improved numerical stability. The second topic will be convolutional neural networks.
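As a small preview of the numerical-stability point, here is a minimal NumPy sketch, not from the lecture; the function name and the split-by-sign trick are my illustration of one standard way to evaluate the logistic sigmoid without overflow:

```python
import numpy as np

def stable_sigmoid(x):
    """Logistic sigmoid evaluated without overflow.

    Computing 1 / (1 + exp(-x)) directly overflows for large
    negative x, because exp(-x) explodes. Splitting on the sign
    keeps every exp() argument <= 0, so it can only underflow
    harmlessly towards 0.
    """
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    pos = x >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-x[pos]))
    e = np.exp(x[~pos])            # safe: x < 0 on this branch
    out[~pos] = e / (1.0 + e)
    return out

print(stable_sigmoid([-1000.0, 0.0, 1000.0]))  # [0.  0.5 1. ]
```

The naive formula would trigger an overflow warning at x = -1000; the split version stays well-behaved over the whole input range.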

If you have questions, don't hesitate to ask them at any time. It's no problem.

Do you have a question?

No?

Too bad, I would have one. Ah, yes, there is a question. Excellent.

[Inaudible question about the slide upload.]

I think uploading the slides should be possible, because I also download them from the cloud. So they have been uploaded, but into a different system. Okay, we will try to put them on StudOn as well; then we use the same mechanism for synchronization.

Okay, excellent.

Good. So,

activation functions, right? So what is this about activation? You know, we occasionally link what we are doing in terms of computing to its biological counterparts, and therefore we have a couple of slides on the biological activation. You know that a neuron is connected to other neurons and has many, many inputs, and when that particular neuron is activated, it fires and can in turn inform other neurons.

So how is this done? Here we already see some differences to what we are actually doing, because first of all, in biological neurons this process happens over time: there are activations, and then neurons keep firing at certain rates. When we talk about our artificial neurons, most of the time we have feed-forward networks, where just one stimulus is processed through the network and nothing is time-dependent; time only enters when we have recurrent neural networks or things like that. So there is no repeated firing at a certain rate. But what is similar is that you have some kind of nonlinear behavior.

This is also why we have the nonlinearities in there. What essentially happens is that you have some stimulus and there is a threshold: only if the threshold is exceeded does the neuron actually fire, and if it is not exceeded, the neuron will not fire.
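To make this thresholding concrete, here is a minimal sketch, not from the slides; the function name, weights, and thresholds are illustrative, of a McCulloch-Pitts-style unit that fires only when the weighted stimulus exceeds a threshold:

```python
import numpy as np

def threshold_fire(inputs, weights, theta):
    """Toy threshold unit: emits 1 only if the weighted sum of
    the inputs exceeds the threshold theta, otherwise 0."""
    stimulus = np.dot(weights, inputs)
    return 1.0 if stimulus > theta else 0.0

# The same stimulus fires or stays silent depending on the threshold.
x = np.array([0.5, 1.0, -0.2])
w = np.array([0.8, 0.4, 1.0])
print(threshold_fire(x, w, theta=0.5))  # 1.0  (stimulus 0.6 > 0.5)
print(threshold_fire(x, w, theta=0.7))  # 0.0  (stimulus 0.6 < 0.7)
```

In practice this hard step is replaced by smooth nonlinearities such as the sigmoid sketched earlier, since the step function has zero gradient almost everywhere and therefore cannot be trained with backpropagation.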

What can also happen in the biological case is that a very strong activation makes the neuron fire much faster, or fire multiple times, and so on. So there are quite a few differences, but the idea of having some nonlinearity in there, which then also takes some time to recover, is something that happens in a biological neuron.

Very loosely associated with this, we have our activation functions. Of course, there is a lot more happening here: you have the myelin sheath that essentially provides insulation, and then you have the ion channels and so on. So it is not as simple as we like to model it in our artificial neural networks; the process of actually communicating this electrical activation to other cells is far more complicated.

Part of a video series
Accessible via: Open Access
Duration: 01:25:29 min
Recording date: 2019-05-16
Uploaded on: 2019-05-17 00:39:05
Language: en-US

Deep Learning (DL) has attracted much interest in a wide range of applications such as image recognition, speech recognition and artificial intelligence, both from academia and industry. This lecture introduces the core elements of neural networks and deep learning; it comprises:

  • (multilayer) perceptron, backpropagation, fully connected neural networks

  • loss functions and optimization strategies

  • convolutional neural networks (CNNs)

  • activation functions

  • regularization strategies

  • common practices for training and evaluating neural networks

  • visualization of networks and results

  • common architectures, such as LeNet, AlexNet, VGG, GoogLeNet

  • recurrent neural networks (RNN, TBPTT, LSTM, GRU)

  • deep reinforcement learning

  • unsupervised learning (autoencoder, RBM, DBM, VAE)

  • generative adversarial networks (GANs)

  • weakly supervised learning

  • applications of deep learning (segmentation, object detection, speech recognition, ...)
